ordinal classification
Learning-to-Rank Meets Language: Boosting Language-Driven Ordering Alignment for Ordinal Classification
We present a novel language-driven ordering alignment method for ordinal classification. The labels in ordinal classification carry additional ordering relations, which are prone to overfitting when learned solely from training data. Recent developments in pre-trained vision-language models inspire us to leverage the rich ordinal priors in human language by converting the original task into a vision-language alignment task. Consequently, we propose L2RCLIP, which fully utilizes the language priors from two perspectives. First, we introduce a complementary prompt tuning technique called RankFormer, designed to enhance the ordering relation of original rank prompts. It employs token-level attention with residual-style prompt blending in the word embedding space. Second, to further incorporate language priors, we revisit the approximate bound optimization of vanilla cross-entropy loss and restructure it within the cross-modal embedding space. We thus propose a cross-modal ordinal pairwise loss to refine the CLIP feature space, where texts and images maintain both semantic alignment and ordering alignment. Extensive experiments on three ordinal classification tasks, including facial age estimation, historical color image (HCI) classification, and aesthetic assessment, demonstrate its promising performance.
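The abstract does not spell out the exact form of the cross-modal ordinal pairwise loss, but the stated goal, that image-text similarities should respect the rank ordering, can be illustrated with a minimal NumPy sketch. This is a hypothetical hinge-style formulation, not the paper's actual loss: it penalizes any pair of rank prompts where a prompt farther from the image's true rank scores a higher cosine similarity than a closer one.

```python
import numpy as np

def ordinal_pairwise_loss(img_emb, text_embs, true_rank, margin=0.1):
    """Hinge-style ordinal pairwise loss (illustrative, not the paper's):
    for an image of rank `true_rank`, text prompts whose rank is closer to
    the true rank should score a higher cosine similarity than prompts
    farther away, by at least `margin`."""
    # cosine similarities between the image and every rank prompt
    img = img_emb / np.linalg.norm(img_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                      # shape (K,)
    K = len(sims)
    loss, n_pairs = 0.0, 0
    for j in range(K):
        for k in range(K):
            # penalize only when rank j is strictly closer to the truth than k
            if abs(j - true_rank) < abs(k - true_rank):
                loss += max(0.0, margin - (sims[j] - sims[k]))
                n_pairs += 1
    return loss / max(n_pairs, 1)
```

Under this sketch, an image embedding whose similarities decay monotonically with rank distance incurs zero loss, while a reversed ordering is penalized.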
Conformal Prediction Sets for Ordinal Classification
Ordinal classification (OC), i.e., labeling instances along classes with a natural ordering, is common in multiple applications such as size- or budget-based recommendations and disease severity labeling. Often in practical scenarios, it is desirable to obtain a small set of likely classes with a guaranteed high chance of including the true class. Recent works on conformal prediction (CP) address this problem for the classification setting with non-ordered labels, but the resulting prediction sets (PS) are often non-contiguous and unsuitable for ordinal classification. In this work, we propose a framework to adapt existing CP methods to generate contiguous sets with guaranteed coverage and minimal cardinality. Our framework employs a novel non-parametric approach for modeling unimodal distributions. Empirical results on both synthetic and real-world datasets demonstrate that our method outperforms SOTA baselines by 4% on Accuracy@K and 8% on PS size.
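To make the contiguity requirement concrete, here is a minimal sketch of one simple way to force a contiguous prediction set: grow a window of adjacent classes outward from the most probable class until its total probability reaches a threshold `tau` (which, in a CP pipeline, would come from calibration). This greedy scheme is an illustration of the problem setup, not the paper's method.

```python
import numpy as np

def contiguous_prediction_set(probs, tau):
    """Grow a contiguous window outward from the argmax class until the
    window's total probability reaches the threshold `tau`.
    Returns (lo, hi), inclusive class indices of the interval."""
    k = int(np.argmax(probs))
    lo = hi = k
    mass = probs[k]
    while mass < tau and (lo > 0 or hi < len(probs) - 1):
        # extend on whichever side has the larger adjacent probability
        left = probs[lo - 1] if lo > 0 else -1.0
        right = probs[hi + 1] if hi < len(probs) - 1 else -1.0
        if left >= right:
            lo -= 1
            mass += probs[lo]
        else:
            hi += 1
            mass += probs[hi]
    return lo, hi
```

Because the window only ever extends to adjacent classes, the resulting set is contiguous by construction, which a per-class score threshold does not guarantee.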
Provably Minimum-Length Conformal Prediction Sets for Ordinal Classification
Zijian Zhang, Xinyu Chen, Yuanjie Shi, Liyuan Lillian Ma, Zifan Xu, Yan Yan
Ordinal classification has been widely applied in many high-stakes applications, e.g., medical imaging and diagnosis, where reliable uncertainty quantification (UQ) is essential for decision making. Conformal prediction (CP) is a general UQ framework that provides statistically valid guarantees, which is especially useful in practice. However, prior ordinal CP methods mainly focus on heuristic algorithms or restrictively require the underlying model to predict a unimodal distribution over ordinal labels. Consequently, they provide limited insight into coverage-efficiency trade-offs, or lack the model-agnostic and distribution-free nature favored by CP methods. To this end, we fill this gap by proposing an ordinal-CP method that is model-agnostic and provides instance-level optimal prediction intervals. Specifically, we formulate conformal ordinal classification as a minimum-length covering problem at the instance level. To solve this problem, we develop a sliding-window algorithm that is optimal for each calibration instance, with only linear time complexity in K, the number of label candidates. Local optimality per instance also improves predictive efficiency in expectation. Moreover, we propose a length-regularized variant that shrinks prediction set size while preserving coverage. Experiments on four benchmark datasets from diverse domains demonstrate the significantly improved predictive efficiency of the proposed methods over baselines (a 15% decrease on average across the four datasets).
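The minimum-length covering idea in the abstract admits a classic two-pointer formulation: among all contiguous label intervals whose probability mass reaches a threshold, return the shortest, in O(K) time. The sketch below is one natural reading of that idea, with the threshold `tau` standing in for whatever calibrated coverage level the method would use; it is not the authors' exact algorithm.

```python
def min_length_interval(probs, tau):
    """Two-pointer sweep: among all contiguous label intervals whose total
    probability is at least `tau`, return the shortest (first found on ties).
    Runs in O(K) for K label candidates."""
    K = len(probs)
    best = (0, K - 1)          # fall back to the full label range
    lo, mass = 0, 0.0
    for hi in range(K):
        mass += probs[hi]
        # shrink from the left while coverage still holds
        while lo < hi and mass - probs[lo] >= tau:
            mass -= probs[lo]
            lo += 1
        if mass >= tau and (hi - lo) < (best[1] - best[0]):
            best = (lo, hi)
    return best
```

Each index enters and leaves the window at most once, which is where the linear complexity in K comes from.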
Supplementary Materials of Learning-to-Rank Meets Language: Boosting Language-Driven Ordering Alignment for Ordinal Classification
The specifics of our experimental settings are outlined in Section 1.2. Additional Ablation Study. Figure 1 presents the embedding spaces corresponding to various configurations. Feature demarcations across different ranks are ambiguous, displaying considerable overlap, which underscores the criticality of their synergistic implementation. We also explore the role of different batch-size settings and report the results in Table 1.